
    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good", that is, to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, a nearly $s$-sparse signal, a near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of compressed sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
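
    A minimal sketch of the exact $\ell_1$-recovery (basis pursuit) program that these conditions certify. The use of cvxpy, the Gaussian sensing matrix, and the toy dimensions are illustrative choices of ours, not the paper's setup.

```python
# Basis pursuit: min ||x||_1  s.t.  Ax = y  (exact l1-recovery, no noise).
# Library (cvxpy) and problem sizes are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5                              # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)      # toy Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                    # noiseless measurements

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```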

    A fast and accurate first-order algorithm for compressed sensing

    This paper introduces a new, fast, and accurate algorithm for solving problems in the area of compressed sensing and, more generally, in the area of signal and image reconstruction from indirect measurements. This algorithm is inspired by recent progress in the development of novel first-order methods in convex optimization, most notably Nesterov's smoothing technique. In particular, there is a crucial property that makes these methods extremely efficient for solving compressed sensing problems. Numerical experiments show the promising performance of our method on problems involving the recovery of signals spanning a large dynamic range.
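
    The paper's algorithm itself is not reproduced here; as a stand-in, the sketch below shows an accelerated first-order iteration of the kind it builds on (a FISTA-style scheme with Nesterov extrapolation) applied to $\ell_1$-regularized least squares. All names and sizes are our assumptions.

```python
# FISTA-style accelerated proximal gradient for min 0.5||Ax - y||^2 + lam*||x||_1.
# An illustrative stand-in, not the paper's exact algorithm.
import numpy as np

def fista(A, y, lam, n_iter=200):
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x, z, t = np.zeros(n), np.zeros(n), 1.0
    for _ in range(n_iter):
        w = z - A.T @ (A @ z - y) / L             # gradient step at extrapolated point
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```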

    On the linear independence of spikes and sines

    The purpose of this work is to survey what is known about the linear independence of spikes and sines. The paper provides new results for the case where the locations of the spikes and the frequencies of the sines are chosen at random. This problem is equivalent to studying the spectral norm of a random submatrix drawn from the discrete Fourier transform matrix. The proof depends on an extrapolation argument of Bourgain and Tzafriri.
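
    The quantity in question is easy to probe numerically; the sketch below draws random spike locations (rows) and sine frequencies (columns) and reports the spectral norm of the resulting DFT submatrix. Sizes are arbitrary toy choices of ours.

```python
# Spectral norm of a random submatrix of the unitary DFT matrix:
# rows ~ random spike locations, columns ~ random sine frequencies.
import numpy as np

rng = np.random.default_rng(1)
n = 256
F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary DFT matrix
rows = rng.choice(n, size=64, replace=False)
cols = rng.choice(n, size=64, replace=False)
print("spectral norm:", np.linalg.norm(F[np.ix_(rows, cols)], 2))
```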

    Analysis of Basis Pursuit Via Capacity Sets

    Finding the sparsest solution $\alpha$ of an under-determined linear system of equations $D\alpha = s$ is of interest in many applications. This problem is known to be NP-hard. Recent work studied conditions on the support size of $\alpha$ that allow its recovery using $\ell_1$-minimization, via the Basis Pursuit algorithm. These conditions often rely on a scalar property of $D$ called the mutual coherence. In this work we introduce an alternative set of features of an arbitrarily given $D$, called the "capacity sets". We show how these can be used to analyze the performance of Basis Pursuit, leading to improved bounds and predictions of performance. Both theoretical and numerical methods are presented, all using the capacity values, and shown to lead to improved assessments of Basis Pursuit's success in finding the sparsest solution of $D\alpha = s$.
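
    For reference, the scalar mutual coherence that capacity sets are meant to improve upon is simple to compute; a sketch, with a random toy dictionary of our choosing:

```python
# Mutual coherence mu(D): largest absolute inner product between distinct
# normalized columns of D.
import numpy as np

def mutual_coherence(D):
    Dn = D / np.linalg.norm(D, axis=0)            # normalize columns
    G = np.abs(Dn.T @ Dn)                         # absolute correlation (Gram) matrix
    np.fill_diagonal(G, 0.0)                      # ignore self-correlations
    return G.max()

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 50))
print("mu(D) =", mutual_coherence(D))
```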

    A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank

    Rank minimization has attracted a lot of attention due to its robustness in data recovery. To overcome the computational difficulty, rank is often replaced with the nuclear norm. For several rank minimization problems, such a replacement has been theoretically proven to be valid, i.e., the solution to the nuclear norm minimization problem is also the solution to the rank minimization problem. Although it is easy to believe that such a replacement may not always be valid, no concrete example has ever been found. We argue that such validity checking cannot be done by numerical computation and show, by analyzing the noiseless latent low-rank representation (LatLRR) model, that even for very simple rank minimization problems the validity may still break down. As a by-product, we find that the solution to the nuclear norm minimization formulation of LatLRR is non-unique. Hence the results of LatLRR reported in the literature may be questionable.
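
    The surrogate under discussion, shown on a toy instance unrelated to LatLRR: nuclear norm minimization for matrix completion, a setting where the replacement is known to be valid under suitable conditions. cvxpy and the instance are our illustrative assumptions.

```python
# Nuclear-norm surrogate for rank minimization on a toy matrix completion
# instance: min ||X||_*  s.t.  X agrees with M on the observed entries.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, r = 10, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 ground truth
obs = (rng.random((n, n)) < 0.6).astype(float)                 # observation mask

X = cp.Variable((n, n))
cp.Problem(cp.Minimize(cp.normNuc(X)), [cp.multiply(obs, X) == obs * M]).solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```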

    Very high quality image restoration by combining wavelets and curvelets

    We outline digital implementations of two newly developed multiscale representation systems, namely the ridgelet and curvelet transforms. We apply these digital transforms to the problem of restoring an image from noisy data and compare our results with those obtained via well-established methods based on the thresholding of wavelet coefficients. We develop a methodology to combine wavelets with these new systems and perform noise removal by exploiting all of them simultaneously. The results of the combined reconstruction exhibit clear advantages over any individual system alone. For example, the residual error contains essentially no visually intelligible structure: no structure is lost in the reconstruction.
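
    The wavelet half of such a scheme can be sketched in a few lines; PyWavelets, hard thresholding, and all parameters below are our illustrative choices, not the authors' implementation (the curvelet side is analogous).

```python
# Wavelet-domain hard thresholding of a noisy image, one ingredient of the
# combined wavelet/curvelet reconstruction.
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3, k=3.0, sigma=1.0):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                              # keep the coarse approximation
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, k * sigma, mode="hard") for d in detail))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(4)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = wavelet_denoise(noisy, sigma=0.1)
```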

    Guaranteed clustering and biclustering via semidefinite programming

    Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation for the biclustering problem with similar recovery guarantees. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as partitioning the nodes of a weighted bipartite complete graph such that the sum of the densities of the resulting bipartite complete subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program when the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
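
    A sketch of a semidefinite relaxation in this family (this particular program is the standard clustering relaxation with a PSD membership-like matrix; the paper's exact formulation may differ in its constraints):

```python
# Semidefinite relaxation for recovering k dense clusters from a weighted
# affinity matrix W. Formulation and usage are illustrative assumptions.
import numpy as np
import cvxpy as cp

def cluster_sdp(W, k):
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,                         # positive semidefinite
                   X >= 0,                         # entrywise nonnegative
                   X @ np.ones(n) == np.ones(n),   # rows sum to one
                   cp.trace(X) == k]               # k clusters
    cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints).solve()
    return X.value      # near block-diagonal when clusters are well separated
```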

    Two-Photon Spiral Imaging with Correlated Orbital Angular Momentum States

    The concept of correlated two-photon spiral imaging is introduced. We begin by analyzing the joint orbital angular momentum (OAM) spectrum of correlated photon pairs. The mutual information carried by the photon pairs is evaluated, and it is shown that when an object is placed in one of the beam paths the value of the mutual information is strongly dependent on object shape and is closely related to the degree of rotational symmetry present. After analyzing the effect of the object on the OAM correlations, the method of correlated spiral imaging is described. We first present a version using parametric downconversion, in which entangled pairs of photons with opposite OAM values are produced, with an object placed in the path of one beam. We then present a classical (correlated, but non-entangled) version. The relative problems and benefits of the classical versus entangled configurations are discussed. The prospect is raised of carrying out compressive imaging via two-photon OAM detection to reconstruct sparse objects with few measurements.
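
    The mutual information used here is a standard functional of the joint OAM spectrum; a sketch on a made-up joint probability matrix, modeling the anti-correlated pairs (l, -l) that downconversion produces with no object in place:

```python
# Mutual information of a joint OAM spectrum P, where P[i, j] is the probability
# of detecting OAM values (l_i, l_j) in the two arms. Toy input of our making.
import numpy as np

def mutual_information(P):
    P = P / P.sum()
    pa, pb = P.sum(axis=1), P.sum(axis=0)         # marginal distributions
    nz = P > 0
    return float((P[nz] * np.log2(P[nz] / np.outer(pa, pb)[nz])).sum())

P = np.fliplr(np.diag(np.ones(7) / 7))            # perfectly anti-correlated pairs
print("I =", mutual_information(P), "bits")       # log2(7) bits for 7 equal pairs
```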

    $\ell_p$-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers

    We assume data sampled from a mixture of $d$-dimensional linear subspaces with spherically symmetric distributions within each subspace and an additional outlier component with a spherically symmetric distribution within the ambient space (for simplicity we may assume that all distributions are uniform on their corresponding unit spheres). We also assume mixture weights for the different components. We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces. We study the recovery of the most significant subspace by minimizing the $\ell_p$-averaged distances of data points from $d$-dimensional subspaces, where $p > 0$. Unlike other $\ell_p$ minimization problems, this minimization is non-convex for all $p > 0$ and thus requires different methods for its analysis. We show that if $0 < p \le 1$, then for any fraction of outliers the most significant subspace can be recovered by $\ell_p$ minimization with overwhelming probability (which depends on the generating distribution and its parameters). We show that when small noise is added around the underlying subspaces, the most significant subspace can be nearly recovered by $\ell_p$ minimization for any $0 < p \le 1$, with an error proportional to the noise level. On the other hand, if $p > 1$ and there is more than one underlying subspace, then with overwhelming probability the most significant subspace cannot be recovered or nearly recovered. This last result does not require spherically symmetric outliers.
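
    The objective in question, the $\ell_p$-averaged distance of data to a candidate subspace, takes a few lines to state; the orthonormal parametrization, data, and comparison below are our toy illustration:

```python
# lp-averaged distance of data X (rows) to the subspace spanned by the
# orthonormal columns of B.
import numpy as np

def lp_averaged_distance(X, B, p):
    residual = X - (X @ B) @ B.T                  # component orthogonal to span(B)
    return (np.linalg.norm(residual, axis=1) ** p).mean()

rng = np.random.default_rng(5)
D, d, N = 5, 2, 200
B_true = np.linalg.qr(rng.standard_normal((D, d)))[0]
X = rng.standard_normal((N, d)) @ B_true.T        # points near the true subspace
X += 0.01 * rng.standard_normal(X.shape)
B_rand = np.linalg.qr(rng.standard_normal((D, d)))[0]
print(lp_averaged_distance(X, B_true, 1.0), "<", lp_averaged_distance(X, B_rand, 1.0))
```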

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as the natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
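
    Item (iii) refers to the forward-backward (proximal gradient) iteration; a generic sketch, with a group-sparsity prior as the non-smooth term (one of the low-complexity regularizers the chapter covers). The problem instance and all names are our assumptions.

```python
# Forward-backward splitting for min f(x) + g(x), f smooth with L-Lipschitz
# gradient: x <- prox_{g/L}(x - grad_f(x)/L). Demo: group lasso.
import numpy as np

def forward_backward(grad_f, prox_g, L, x0, n_iter=300):
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - grad_f(x) / L, 1.0 / L)
    return x

def prox_group_l1(x, t, groups):
    out = np.zeros_like(x)                        # block soft-thresholding
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > t:
            out[g] = (1 - t / nrm) * x[g]
    return out

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:3] = 1.0                                  # a single active group
y = A @ x_true
groups = [np.arange(3 * i, 3 * i + 3) for i in range(20)]
lam, L = 0.1, np.linalg.norm(A, 2) ** 2
x = forward_backward(lambda v: A.T @ (A @ v - y),
                     lambda v, t: prox_group_l1(v, lam * t, groups),
                     L, np.zeros(60))
```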